
    Obstacle Detection Based on Fusion Between Stereovision and 2D Laser Scanner

    Obstacle detection is an essential task for mobile robots. The subject has been investigated for many years and many obstacle detection systems have been proposed, yet designing an accurate, robust, and reliable system remains challenging, especially in outdoor environments. The purpose of this chapter is therefore to present new techniques and tools for designing an accurate, robust, and reliable obstacle detection system for outdoor environments based on a minimal number of sensors. Experiments and assessments of existing systems show that a single sensor is not enough to meet these requirements: at least two complementary sensors are needed. In this chapter, a stereovision sensor and a 2D laser scanner are considered.
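
    The chapter's exact fusion scheme is not detailed in this abstract. As a rough illustration of one way a stereovision module and a 2D laser scanner can complement each other, the sketch below confirms stereo obstacle hypotheses by nearest-neighbour gating against laser impact points in a common ground plane; all interfaces, frames, and the 0.5 m gate are hypothetical, not the chapter's actual method.

```python
import numpy as np

def fuse_detections(stereo_obstacles, laser_points, gate=0.5):
    """Keep stereovision obstacle hypotheses confirmed by the 2D laser
    scanner (hypothetical interfaces, not the chapter's actual method).

    stereo_obstacles : list of (x, y) obstacle positions, in metres,
                       in the vehicle frame, from the stereo module.
    laser_points     : list of (x, y) laser impact points, same frame.
    gate             : association distance in metres (assumed value).
    """
    if not laser_points:
        return []
    lp = np.asarray(laser_points, dtype=float)
    confirmed = []
    for ox, oy in stereo_obstacles:
        # Distance from this hypothesis to the closest laser impact point.
        d = np.min(np.hypot(lp[:, 0] - ox, lp[:, 1] - oy))
        if d < gate:
            confirmed.append((ox, oy))
    return confirmed
```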

    Stereovision-based 3D lane detection system: a model driven approach

    A new stereovision-based method for road lane detection and 3D geometry estimation is presented in this paper. The proposed approach is based on a recognition algorithm driven by a statistical model of the 3D road lane, projected into both stereoscopic images. First, the model is initialized through a training stage. It is then updated iteratively from successively extracted image features; after each iteration, the detection of the next features, in either image of the stereoscopic pair, is driven by the features already detected. The parameters of the road lane, such as width, horizontal and vertical curvature, and roll, pitch, and yaw angles, are estimated. The variance of each parameter is also estimated and minimized through the estimation process. Unlike previously proposed approaches, no disparity map is required: the matching of the image features is obtained directly as a result of the model update, so the computing time is low. Experiments on computer-generated and real images are carried out to assess the efficiency and accuracy of the method.
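
    The estimator is not named in this abstract, but iteratively updating the lane parameters while estimating and minimizing their variances has exactly the shape of a Kalman measurement update; the generic sketch below is written under that assumption, and every symbol in it is hypothetical.

```python
import numpy as np

def lane_parameter_update(x, P, z, h, H, R):
    """One generic Kalman measurement update of the lane-parameter state
    (an assumed estimator; the paper's exact equations are not given here).

    x : lane parameters (e.g. width, curvatures, roll, pitch, yaw angles)
    P : parameter covariance, with the per-parameter variances on its diagonal
    z : measured image position of one newly extracted feature
    h : image position predicted by projecting the current model
    H : Jacobian of the model projection w.r.t. the parameters
    R : measurement noise covariance
    """
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - h)                # correct the lane parameters
    P = (np.eye(len(x)) - K @ H) @ P   # the variances shrink here
    return x, P
```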

    Free space estimation for autonomous navigation

    One of the key issues in autonomous navigation is free-space estimation. This paper presents an original framework and method for extracting such an area using a stereovision system. The v-disparity algorithm [11] is extended to provide a reliable and precise road profile on all types of roads. The free space is then estimated by classifying the pixels of the disparity map; this classification relies on the road profile and the u-disparity image. Each stage of the algorithm is presented and experimental results are shown.
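
    For illustration, the v-disparity image of [11] can be built directly from a dense disparity map by accumulating one disparity histogram per image row; a planar road then shows up as a straight line whose fit gives the road profile. A minimal sketch, with the paper's own implementation details left aside:

```python
import numpy as np

def v_disparity(disp, d_max=64):
    """Accumulate a v-disparity image: one disparity histogram per row.
    disp is an HxW disparity map; invalid pixels are marked negative.
    On a planar road the ground traces a straight line in this image."""
    H, W = disp.shape
    vdisp = np.zeros((H, d_max), dtype=np.int32)
    for v in range(H):
        row = disp[v]
        valid = row[(row >= 0) & (row < d_max)]
        # Count how many pixels of row v take each disparity value.
        vdisp[v] = np.bincount(valid.astype(int), minlength=d_max)[:d_max]
    return vdisp
```

    The u-disparity image used in the classification step is obtained the same way, with one histogram per image column instead of per row.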

    A model driven 3D lane detection system using stereovision

    This paper presents a new method for the detection and 3D reconstruction of the road lane using onboard stereovision. The proposed algorithm makes it possible to overcome the assumptions commonly made by most detection systems based on monocular vision, such as a flat road, a constant pitch angle, or the absence of a roll angle. The method comprises two modules. The first detects the road markings of the lane in each image of the stereoscopic pair. It relies on a recognition algorithm driven by a statistical model of the road lane: the initial state of the model is obtained after a learning stage, and it is updated using the results of a feature extraction stage. The model takes into account the intrinsic links between the projections of the road in the two images of the stereoscopic pair, so that its update from a feature extracted in one of the two images drives the detection of features in the other image. No disparity map is required, since the matching of the road features is obtained directly as the result of the model update. The second module performs the 3D reconstruction and relative localization of the vehicle with respect to its lane. The parameters of a 3D surface including the horizontal and vertical profiles of the road lane are estimated from the previously detected road borders. The robustness of the algorithm is evaluated on synthetic and real images.
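
    As a toy example of the link between the two projections: with rectified cameras, a road point detected at column u_l and row v of the left image should appear at u_r = u_l - d(v) in the right image, where d(v) is the road disparity predicted by the current model; a feature found in one image thus directly constrains the search in the other, with no disparity map. The sketch below assumes a simple planar road (the paper estimates a richer 3D surface), and both functions are illustrative only.

```python
def planar_road_disparity(v, a, b):
    """Disparity of a flat road at image row v: the straight line
    d = a * v + b that a planar ground traces in the v-disparity
    domain (toy model, hypothetical parameters a and b)."""
    return a * v + b

def predicted_right_column(u_left, v, a, b):
    """Predict the rectified right-image column of a lane feature seen
    at (u_left, v) in the left image, using the road model instead of
    a disparity map (illustrative, not the paper's exact equations)."""
    return u_left - planar_road_disparity(v, a, b)

# Example with made-up model parameters: a feature at (320, 400) in the
# left image is searched for near column 292 in the right image.
print(predicted_right_column(320, 400, a=0.12, b=-20.0))  # -> 292.0
```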

    Perception through Scattering Media for Autonomous Vehicles, in Autonomous Robots Research Advances

    The perception of the environment is a fundamental task for autonomous robots. Unfortunately, the performance of vision systems is drastically degraded in the presence of bad weather, especially fog. Indeed, due to the scattering of light by atmospheric particles, the quality of the light signal is reduced compared to what it would be in clear air. Detecting and quantifying these degradations, and even identifying their causes, should make it possible to estimate the operating range of vision systems and thus constitute a kind of self-diagnosis system. In parallel, it should be possible to adapt the operation of the sensors, improve the quality of the signal, and dynamically adjust the operating range of the associated processing. First, we introduce some background on atmospheric optics and study the behavior of existing exteroceptive sensors in scattering media. Second, we explain how existing perception systems can be used, and made to cooperate, to derive descriptors of the visibility conditions; in particular, we show how to detect the presence of fog and estimate the visibility range. Once the weather conditions have been determined, they can be exploited to reduce the impact of adverse weather on the operation of vision systems, and we propose several solutions to enhance the performance of vision algorithms in foggy weather. Finally, we tackle the question of whether light scattering can be turned to our advantage, allowing novel perception algorithms to be developed. Experiments in real situations are presented to illustrate these developments, and the limits of the systems, future challenges, and trends are discussed.
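
    As one concrete, well-established building block for visibility estimation: Koschmieder's law says the apparent contrast of a distant object decays as exp(-k * d), where k is the atmospheric extinction coefficient, and the CIE defines the meteorological visibility distance as the range at which 5% of the contrast survives, roughly 3 / k. A minimal sketch (the chapter's camera-based estimators of k are not reproduced here):

```python
import math

def visibility_range(k, contrast_threshold=0.05):
    """Meteorological visibility distance (m) from the atmospheric
    extinction coefficient k (1/m), via Koschmieder's law: contrast
    decays as exp(-k * d), and the CIE 5% threshold gives
    d = -ln(0.05) / k, i.e. about 3 / k."""
    return -math.log(contrast_threshold) / k

# Example: fog with k = 0.03 m^-1 gives roughly 100 m of visibility.
print(round(visibility_range(0.03)))  # -> 100
```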
